function [alpha,theta,solution,t,lambda,gamma,maxerr]=...
   oanders(MI,SIGMA,J,tmax,delta,t,lambda)
% OANDERS solves the original Anderson's task, two Gaussians.
% [alpha,theta,solution,t,lambda,gamma,maxerr]=...
%    oanders(MI,SIGMA,J,tmax,delta,t,lambda)
%
% OANDERS solves the Anderson-Bahadur task. The goal is to find a linear 
%   decision rule separating two classes so that the probability of 
%   misclassification is minimized. Each class is described by a conditional 
%   probability density function p(x|k) (where x is the observation and k is 
%   the class label, 1 or 2) which is a normal distribution. 
%
% Input:
% (notation: N is the dimension of the feature space)
% OANDERS(MI,SIGMA,J,tmax,delta)
%   MI [Nx2] matrix containing two column vectors of mean values 
%      MI = [mi_1,mi_2].
%   SIGMA [Nx(2*N)] matrix containing two square covariance matrices N-by-N,
%      SIGMA = [Sigma1,Sigma2].
%   J [1x2] vector containing class labels (1 and 2) for the pairs of arguments 
%      {mi_i,sigma_i}. (Note: the vector J can contain one of the following 
%      combinations, J=[1,2] or J=[2,1]).
%   tmax [1x1] is the maximal number of steps of the algorithm. Acceptable
%      values are positive integers or inf (which means infinity).
%   delta [1x1] is a non-negative real number which determines the accuracy of
%      the found solution (note: setting delta=0 requests the most accurate 
%      solution). The algorithm works until the desired accuracy is
%      achieved or the maximal step number is exceeded.
%
% OANDERS(MI,SIGMA,J,tmax,delta,t,lambda) begins from the state given by
%   t [1x1] initial step number.
%   lambda [1x1] real number which is a state variable of the algorithm.
%
% Output:
%   alpha [Nx1] is the normal vector of the found separating hyperplane.
%   theta [1x1] is the threshold of the found separating hyperplane.
%   solution [1x1] contains 0 if the solution is not found (step number exceeded)
%                  and 1 if the solution is found.
%   t [1x1] is the step number at which the algorithm halted.
%   lambda [1x1], gamma [1x1] are state variables of the algorithm.
%   maxerr [1x1] is an upper bound on the probability of misclassification.
%
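% Example:
%   (illustrative sketch only; the 2-D means and covariances below are made-up
%   values chosen for demonstration, not taken from the toolbox)
%   MI    = [0 3; 0 1];                % mi_1 = [0;0], mi_2 = [3;1]
%   SIGMA = [eye(2), [2 0.5; 0.5 1]];  % SIGMA = [Sigma1, Sigma2]
%   J     = [1 2];
%   [alpha,theta,solution,t,lambda,gamma,maxerr] = ...
%      oanders(MI,SIGMA,J,inf,1e-6);
%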
% See also GANDERS, GANDERS2, EANDERS, GGANDERS.
%

% Statistical Pattern Recognition Toolbox, Vojtech Franc, Vaclav Hlavac
% (c) Czech Technical University Prague, http://cmp.felk.cvut.cz
% Written Vojtech Franc (diploma thesis) 04.11.1999, 6.5.2000
% Modifications
% 24. 6.00 V. Hlavac, comments polished.
% 1. 8. 00 V. Franc, comments changed

% default setting: start from scratch unless both state variables t and
% lambda are supplied
if nargin < 7,
   t=0;
end

% delta determines the precision of the result (0 is the most precise)
if nargin < 5,
   delta=0;
end

if nargin < 4,
  tmax=inf;
end

if nargin < 3,
  error('Not enough input arguments.');
end


% if the function is called for the first time, initialize the state variables
if t==0,
   lambda=0.5;
   t=1;
end

% get dimension and number of the distributions
N=size(MI,1);
K=size(MI,2);

% get mi and sigma for the first and second class 
class1=0;
class2=0;
for i=1:K,
   if J(i)==1 & class1==0,
      class1=1;
      mi1=MI(:,i);
      sg1=SIGMA(:,(i-1)*N+1:i*N);
   elseif J(i)==2 & class2==0,
      class2=1;
      mi2=MI(:,i);
      sg2=SIGMA(:,(i-1)*N+1:i*N);
   end
end

% start iterations
solution=0;
while tmax > 0 & solution==0,
   tmax=tmax-1;

   % perform one iteration
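   % the normal vector alpha solves the linear system
   % ((1-lambda)*Sigma1 + lambda*Sigma2) * alpha = mi1 - mi2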
   alpha=((1-lambda)*sg1+lambda*sg2)\(mi1-mi2);

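   % gamma is the ratio of the standard deviations of the projection alpha'*x
   % under class 2 and class 1; at the fixed point gamma = (1-lambda)/lambda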
   gamma=sqrt((alpha'*sg2*alpha)/(alpha'*sg1*alpha));

   % stop if the fixed-point condition gamma = (1-lambda)/lambda holds to
   % within delta
   if abs( gamma - (1-lambda)/lambda ) < delta,
      solution=1;
   else
      % update lambda
      t=t+1;
      lambda=1/(1+gamma);
   end

end

% compute the threshold theta; at the fixed point the hyperplane alpha'*x = theta
% has the same normalized distance from both class means
theta=lambda*(alpha'*sg2*alpha)+(alpha'*mi2);

% upper bound on the probability of misclassification
maxerr=andrerr(MI,SIGMA,J,alpha,theta);